Results 1 - 3 of 3
1.
5th IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2022 ; : 316-322, 2022.
Article in English | Scopus | ID: covidwho-2254697

ABSTRACT

Recently, automatic generation of radiology reports has attracted attention, since it can both relieve pressure on doctors and help avoid misdiagnosis. Radiology report generation is a fundamental and critical step in auxiliary diagnosis. Due to the COVID-19 pandemic, a more accurate and robust structure for radiology report generation is urgently needed. Although radiology report generation is making remarkable progress, existing methods still face two main shortcomings. On the one hand, the strong noise in medical images usually interferes with the diagnosis process. On the other hand, these methods usually require complex structures while ignoring that efficiency is also an important metric for this task. To solve these two problems, we introduce a novel method for medical report generation, termed the attention-guided Object Dropout MLP (ODM) model. In brief, ODM first incorporates a tailored pre-trained model to pre-align medical regions and corresponding language reports to capture text-related image features. Then, a fine-grained dropout strategy based on the attention matrix is proposed to relieve training pressure by dropping content-irrelevant information. Finally, inspired by the lightweight structure of the Multilayer Perceptron (MLP), ODM adopts an MLP-based structure as an encoder to simplify the entire framework. Extensive experiments demonstrate the effectiveness of ODM. More remarkably, ODM achieves state-of-the-art performance on the IU X-Ray, MIMIC-CXR, and ROCO datasets, increasing the CIDEr-D score from 26.8% to 41.4%, from 21.1% to 30.2%, and from 9.1% to 19.3%, respectively. © 2022 IEEE.
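The fine-grained attention-based dropout described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's procedure: the function name, the keep ratio, and the rule of retaining the highest-attention tokens are all assumptions.

```python
def attention_guided_dropout(tokens, attn, keep_ratio=0.5):
    """Keep only the tokens that receive the most attention mass.

    tokens     : list of patch feature vectors
    attn       : per-token attention weights (same length as tokens)
    keep_ratio : fraction of tokens to retain

    Hypothetical sketch; the paper's fine-grained strategy may differ.
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # rank token indices by attention weight, highest first
    ranked = sorted(range(len(tokens)), key=lambda i: attn[i], reverse=True)
    keep = sorted(ranked[:n_keep])          # preserve original token order
    return [tokens[i] for i in keep]

# toy example: six 2-d patch features, half retained
tokens = [[0, 0], [1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]
attn = [0.05, 0.30, 0.10, 0.25, 0.05, 0.25]
kept = attention_guided_dropout(tokens, attn)
print(kept)  # [[1, 1], [3, 3], [5, 5]]
```

Dropping the low-attention (content-irrelevant) tokens before the encoder is what would shrink the sequence the MLP-based encoder has to process.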

2.
World Wide Web ; : 1-18, 2022 Aug 27.
Article in English | MEDLINE | ID: covidwho-2242664

ABSTRACT

Medical reports have significant clinical value to radiologists and specialists, especially during a pandemic like COVID. However, beyond the common difficulties faced in natural image captioning, medical report generation specifically requires the model to describe a medical image with a fine-grained, semantically coherent paragraph that satisfies both medical common sense and logic. Previous works generally extract global image features and attempt to generate a paragraph similar to the referenced reports; this approach has two limitations. Firstly, the regions of primary interest to radiologists are usually located in a small area of the image, so the remaining parts could be considered irrelevant noise during training. Secondly, many similar sentences are used in each medical report to describe the normal regions of the image, which causes serious data bias; this deviation is likely to teach models to generate these inessential sentences on a regular basis. To address these problems, we propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns. Specifically, auxiliary patches are explored to expand the widely used visual patch features before they are fed to the Transformer encoder, while external linguistic signals help the decoder better master prior knowledge during pre-training. Our approach performs well on common benchmarks, including CX-CHR, IU X-Ray, and the COVID-19 CT Report dataset (COV-CTR), demonstrating that combining auxiliary signals with a Transformer architecture brings a significant improvement in medical report generation. The experimental results confirm that auxiliary-signal-driven Transformer-based models are capable of outperforming previous approaches on both medical terminology classification and paragraph generation metrics.
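The auxiliary-patch expansion this abstract describes amounts to extending the encoder's input sequence. A minimal sketch, assuming the auxiliary patches are simply extra feature vectors appended to the visual patch sequence (the real ASGK pipeline derives them from the image, which is not shown here):

```python
def expand_with_auxiliary(visual_patches, auxiliary_patches):
    """Append auxiliary patch features to the visual patch sequence so
    a Transformer encoder can attend over both sources at once.

    Illustrative only; ASGK's actual auxiliary-signal extraction is
    more involved than list concatenation.
    """
    return visual_patches + auxiliary_patches

# toy example: three visual patches expanded with two auxiliary ones
visual = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
aux = [[0.9, 0.9], [0.8, 0.8]]
sequence = expand_with_auxiliary(visual, aux)
print(len(sequence))  # 5
```

Because self-attention operates over the whole sequence, the decoder's cross-attention can then draw on both the global visual features and the auxiliary regions.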

3.
19th IEEE International Symposium on Biomedical Imaging, ISBI 2022 ; 2022-March, 2022.
Article in English | Scopus | ID: covidwho-1846116

ABSTRACT

Automatic medical report generation is an emerging field that aims to generate medical reports from medical images. The report-writing process can be tedious for senior radiologists and challenging for junior ones, so it is of great importance to expedite it. In this work, we propose an EnricheD DIsease Embedding based Transformer (Eddie-Transformer) model, which jointly performs disease detection and medical report generation. This is done by decoupling the latent visual features into semantic disease embeddings and disease states via our state-aware mechanism. Then, our model entangles the learned diseases and their states, enabling explicit and precise disease representations. Finally, the Transformer model receives the enriched disease representations to generate high-quality medical reports. Our approach shows promising results on the widely used Open-I benchmark and a COVID-19 dataset. © 2022 IEEE.
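The "entangling" of disease embeddings with disease states can be pictured as an elementwise fusion of two vectors. The multiplicative rule below is only one plausible reading of the abstract, not the paper's confirmed mechanism:

```python
def entangle(disease_emb, state_emb):
    """Fuse a disease embedding with its state embedding elementwise.

    Hypothetical sketch: Eddie-Transformer's actual fusion of diseases
    and states may use a different operation entirely.
    """
    return [d * s for d, s in zip(disease_emb, state_emb)]

# toy example: a disease vector modulated by a 'present' state vector
disease = [0.5, 1.0, 2.0]
present = [1.0, 1.0, 0.5]
rep = entangle(disease, present)
print(rep)  # [0.5, 1.0, 1.0]
```

The point of such a fusion is that the same disease embedding yields different enriched representations depending on the predicted state (present, absent, uncertain), which the Transformer decoder can then verbalize differently.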
